An Aligned Constraint Programming Model For Serial Batch Scheduling With Minimum Batch Size
Huertas, Jorge A., Van Hentenryck, Pascal
Abstract--In serial batch (s-batch) scheduling, jobs from similar families are grouped into batches and processed sequentially to avoid the repetitive setups required when processing consecutive jobs of different families. Despite the wide success of Constraint Programming (CP) in scheduling, only three CP models have been proposed for this problem with minimum batch sizes, a common requirement in many practical settings, including the ion implantation area in semiconductor manufacturing. These existing CP models rely on a predefined virtual set of possible batches that suffers from the curse of dimensionality and adds complexity to the problem. This paper proposes a novel CP model that does not rely on this virtual set. Instead, it uses key alignment parameters that allow it to reason directly on the sequences of same-family jobs scheduled on the machines, resulting in a more compact formulation. This new model is further improved by exploiting the problem's structure with tailored search phases and strengthened inference levels of the constraint propagators. Extensive computational experiments on nearly five thousand instances compare the proposed models against existing methods in the literature, including mixed-integer programming formulations, tabu search meta-heuristics, and CP approaches. The results demonstrate the superiority of the proposed models on small-to-medium instances with up to 100 jobs, and their ability to find solutions up to 25% better than those produced by existing methods on large-scale instances with up to 500 jobs, 10 families, and 10 machines.

In the current and highly competitive landscape of the manufacturing industry, companies are under growing pressure to minimize production costs and reduce cycle times. A crucial approach to achieving these goals and boosting production efficiency is processing multiple similar jobs in groups called batches [1].
Two types of batching can be distinguished in the scheduling literature, depending on how the jobs are processed inside their batch: (i) parallel batching (p-batch), where jobs inside a batch are processed in parallel at the same time [2]; and (ii) serial batching (s-batch), where jobs inside a batch are processed sequentially, one after the other [3]. The benefits of p-batching in the manufacturing industry are straightforward due to the parallelized processing of the jobs inside a batch. In contrast, the benefits of s-batching usually come from grouping jobs that require similar machine configurations to avoid repetitive setups [4].
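The setup-avoidance benefit of s-batching described above is easy to see numerically. The following sketch is purely illustrative (not from either paper; the jobs, families, and setup duration are made up): a setup is paid only when consecutive jobs belong to different families, so grouping same-family jobs cuts total time.

```python
# Illustrative sketch: total time of a job sequence on one machine, where a
# setup is incurred only when consecutive jobs belong to different families --
# the saving that motivates s-batching. All numbers are hypothetical.
def total_time(jobs, setup):
    """jobs: list of (family, processing_time); setup: fixed setup duration."""
    t, prev_family = 0, None
    for family, p in jobs:
        if family != prev_family:  # a new batch starts: pay the setup
            t += setup
        t += p
        prev_family = family
    return t

# Interleaved families pay four setups; grouping by family pays only two.
interleaved = [("A", 3), ("B", 2), ("A", 3), ("B", 2)]
batched     = [("A", 3), ("A", 3), ("B", 2), ("B", 2)]
print(total_time(interleaved, setup=5))  # 30
print(total_time(batched, setup=5))      # 20
```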
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Europe > Ukraine (0.04)
- Europe > Switzerland (0.04)
- Semiconductors & Electronics (1.00)
- Information Technology > Hardware (0.35)
Constraint Programming Models For Serial Batch Scheduling With Minimum Batch Size
Huertas, Jorge A., Van Hentenryck, Pascal
In serial batch (s-batch) scheduling, jobs are grouped in batches and processed sequentially within their batch. This paper considers multiple parallel machines, non-identical job weights and release times, and sequence-dependent setup times between batches of different families. Although s-batching has been widely studied in the literature, very few papers have taken into account a minimum batch size, typical in practical settings such as semiconductor manufacturing and the metal industry. The problem with this minimum batch size requirement has been mostly tackled with dynamic programming and meta-heuristics, and no article has ever used constraint programming (CP) to do so. This paper fills this gap by proposing three CP models for s-batching with minimum batch size, including an Interval Assignment model that computes and bounds the size of the batches using the presence literals of the jobs' interval variables. The computational experiments on standard cases compare the three CP models with two existing mixed-integer programming (MIP) models from the literature. The results demonstrate the versatility of the proposed CP models in handling multiple variations of s-batching, and their ability to produce better solutions faster than the MIP models on large instances.

Introduction

In the current and highly competitive landscape of the manufacturing industry, companies are under growing pressure to minimize production costs and reduce cycle times. One effective strategy to improve efficiency is to process similar tasks, called jobs, together in groups known as batches [1]. There are two main ways to process these batches. In parallel batching (p-batch), all jobs in a batch are processed simultaneously [2]. In contrast, in serial batching (s-batch), jobs in a batch are processed sequentially one after another [3]. The benefits of p-batching are obvious since it saves time by processing multiple jobs at once.
Similarly, s-batching is especially useful when grouping similar jobs can prevent repetitive machine setups, which are time-consuming and costly [4]. Serial batching appears in many industries, including metal processing [5], additive manufacturing (3D printing) [5, 6], paint [7] and pharmaceutical production [8], chemical manufacturing [9], and semiconductor manufacturing [10, 11].
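The minimum-batch-size idea behind the abstract's Interval Assignment model can be sketched in plain Python (a hedged illustration, not the paper's model): each (job, batch) pair carries a presence literal, the size of a batch is the sum of its presence literals, and a batch is feasible only if it is empty or reaches the minimum size.

```python
# Hedged sketch of the minimum-batch-size constraint: presence literals are
# modeled as plain booleans instead of CP interval variables. Names and the
# toy assignment below are hypothetical.
def batch_sizes(presence, n_batches):
    """presence: dict (job, batch) -> bool; returns the job count per batch."""
    sizes = [0] * n_batches
    for (job, batch), present in presence.items():
        if present:
            sizes[batch] += 1
    return sizes

def min_batch_size_ok(presence, n_batches, min_size):
    """A batch is feasible if it is empty or holds at least min_size jobs."""
    return all(s == 0 or s >= min_size
               for s in batch_sizes(presence, n_batches))

# Two jobs in batch 0, none in batch 1: feasible for min_size=2, not for 3.
assignment = {(0, 0): True, (1, 0): True, (2, 1): False}
print(min_batch_size_ok(assignment, n_batches=2, min_size=2))  # True
print(min_batch_size_ok(assignment, n_batches=2, min_size=3))  # False
```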
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Asia > Middle East > Jordan (0.04)
- Semiconductors & Electronics (0.95)
- Materials > Chemicals (0.54)
- Machinery > Industrial Machinery (0.54)
Automated Planning for Optimal Data Pipeline Instantiation
Amado, Leonardo Rosa, Vogel, Adriano, Griebler, Dalvan, Licks, Gabriel Paludo, Simon, Eric, Meneguzzi, Felipe
Data pipeline frameworks provide abstractions for implementing sequences of data-intensive transformation operators, automating the deployment and execution of such transformations in a cluster. Deploying a data pipeline, however, requires computing resources to be allocated in a data center, ideally minimizing the overhead for communicating data and executing operators in the pipeline while considering each operator's execution requirements. In this paper, we model the problem of optimal data pipeline deployment as planning with action costs, where we propose heuristics aiming to minimize total execution time. Experimental results indicate that the heuristics can outperform the baseline deployment and that a heuristic based on connections outperforms other strategies.
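Planning with action costs, as used above, searches for a minimum-cost action sequence from an initial state to a goal state. A minimal sketch of that machinery (uniform-cost search; the pipeline states and deployment actions below are hypothetical stand-ins, not the paper's encoding):

```python
import heapq

# Hedged sketch: optimal-cost planning via uniform-cost search. States and
# actions are toy stand-ins for pipeline-deployment decisions; costs mimic
# placement/communication overheads and are made up.
def plan(start, goal, actions):
    """actions: dict state -> list of (action_name, next_state, cost)."""
    frontier = [(0, start, [])]
    best = {start: 0}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        for name, nxt, c in actions.get(state, []):
            if cost + c < best.get(nxt, float("inf")):
                best[nxt] = cost + c
                heapq.heappush(frontier, (cost + c, nxt, path + [name]))
    return None

# Placing op1 on node B is cheaper locally, but co-locating both operators
# on node A avoids communication cost and wins overall.
actions = {
    "s0":  [("op1@A", "s1A", 2), ("op1@B", "s1B", 1)],
    "s1A": [("op2@A", "goal", 1)],
    "s1B": [("op2@B", "goal", 3)],
}
print(plan("s0", "goal", actions))  # (3, ['op1@A', 'op2@A'])
```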
- South America > Brazil > Rio Grande do Sul (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > New York > New York County > New York City (0.04)
Discrete Differential Evolution Particle Swarm Optimization Algorithm for Energy Saving Flexible Job Shop Scheduling Problem Considering Machine Multi States
Wang, Da, Zhang, Yu, Zhang, Kai, Li, Junqing, Li, Dengwang
With the continuous deepening of low-carbon emission reduction policies, manufacturing industries urgently need sensible energy-saving scheduling schemes to balance improving production efficiency and reducing energy consumption. In energy-saving scheduling, reasonable machine state switching is key to achieving the expected goals, i.e., whether the machines need to switch speed between different operations, and whether the machines need to add extra setup time between different jobs. To this end, this work proposes a novel machine multi-state energy-saving flexible job shop scheduling problem (EFJSP-M), which simultaneously takes into account multiple machine speeds and setup times. To address the proposed EFJSP-M, a discrete differential evolution particle swarm optimization algorithm (D-DEPSO) is designed. Specifically, D-DEPSO includes a hybrid initialization strategy to improve the initial population performance, an updating mechanism embedded with differential evolution operators to enhance population diversity, and a critical path variable neighborhood search strategy to expand the solution space. Finally, experimental results on the DPs and MKs datasets, compared with five state-of-the-art algorithms, demonstrate the feasibility of EFJSP-M and the superiority of D-DEPSO.
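The multi-speed trade-off at the heart of EFJSP-M can be made concrete with a toy calculation (an illustration only; the power figures are invented, not from the paper): running a machine faster shortens processing time but draws more power, so energy per operation differs across speed levels.

```python
# Hedged sketch of the speed/energy trade-off in multi-state machines.
# Power draw per speed level is hypothetical.
def op_energy(base_time, speed, power_at_speed):
    """Processing time scales as base_time / speed; energy = power * time."""
    t = base_time / speed
    return power_at_speed * t

# A 12-unit operation at three speed levels: faster finishes sooner but
# costs more energy, which is the tension the scheduler must balance.
for speed, power in [(1.0, 5.0), (1.5, 9.0), (2.0, 14.0)]:
    print(speed, op_energy(12, speed, power))
```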
Exploring Multi-Agent Reinforcement Learning for Unrelated Parallel Machine Scheduling
Zampella, Maria, Otamendi, Urtzi, Belaunzaran, Xabier, Artetxe, Arkaitz, Olaizola, Igor G., Longo, Giuseppe, Sierra, Basilio
Scheduling problems pose significant challenges in resource, industry, and operational management. This paper addresses the Unrelated Parallel Machine Scheduling Problem (UPMS) with setup times and resources using a Multi-Agent Reinforcement Learning (MARL) approach. The study introduces the Reinforcement Learning environment and conducts empirical analyses, comparing MARL with Single-Agent algorithms. The experiments employ various deep neural network policies for single- and Multi-Agent approaches. Results demonstrate the efficacy of the Maskable extension of the Proximal Policy Optimization (PPO) algorithm in Single-Agent scenarios and the Multi-Agent PPO algorithm in Multi-Agent setups. While Single-Agent algorithms perform adequately in reduced scenarios, Multi-Agent approaches reveal challenges in cooperative learning but a scalable capacity. This research contributes insights into applying MARL techniques to scheduling optimization, emphasizing the need for algorithmic sophistication balanced with scalability for intelligent scheduling solutions.
- Asia > Middle East > Jordan (0.04)
- Europe > Spain > Basque Country (0.04)
- Europe > Italy (0.04)
- Research Report (1.00)
- Overview (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Optimizing Job Shop Scheduling in the Furniture Industry: A Reinforcement Learning Approach Considering Machine Setup, Batch Variability, and Intralogistics
Schneevogt, Malte, Binninger, Karsten, Klarmann, Noah
This paper explores the potential application of Deep Reinforcement Learning (DRL) in the furniture industry. To offer a broad product portfolio, most furniture manufacturers are organized as a job shop, which ultimately results in the Job Shop Scheduling Problem (JSSP). The JSSP is addressed with a focus on extending traditional models to better represent the complexities of real-world production environments. Existing approaches frequently fail to consider critical factors such as machine setup times or varying batch sizes. A concept for a model is proposed that provides a higher level of information detail to enhance scheduling accuracy and efficiency. The concept introduces the integration of DRL for production planning, particularly suited to batch production industries such as the furniture industry. The model extends traditional approaches to JSSPs by including job volumes, buffer management, transportation times, and machine setup times. This enables more precise forecasting and analysis of production flows and processes, accommodating the variability and complexity inherent in real-world manufacturing processes. The RL agent learns to optimize scheduling decisions. It operates within a discrete action space, making decisions based on detailed observations. A reward function guides the agent's decision-making process, thereby promoting efficient scheduling and meeting production deadlines. Two integration strategies for implementing the RL agent are discussed: episodic planning, which is suitable for low-automation environments, and continuous planning, which is ideal for highly automated plants. While episodic planning can be employed as a standalone solution, the continuous planning approach necessitates the integration of the agent with ERP and Manufacturing Execution Systems. This integration enables real-time adjustments to production schedules based on dynamic changes.
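A reward function of the kind described, one that promotes meeting deadlines while discouraging lateness and setups, can be sketched as follows (a hedged illustration; the weights and signal shape are invented, not the paper's actual reward):

```python
# Hedged sketch of a scheduling reward: bonus for on-time completion,
# penalties for lateness and for machine setups. Weights are hypothetical.
def reward(completion, due, setups, late_w=2.0, setup_w=0.5):
    lateness = max(0.0, completion - due)
    on_time_bonus = 1.0 if completion <= due else 0.0
    return -late_w * lateness - setup_w * setups + on_time_bonus

print(reward(completion=8, due=10, setups=2))   # on time, two setups: 0.0
print(reward(completion=12, due=10, setups=0))  # two units late: -4.0
```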
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
A mathematical model for simultaneous personnel shift planning and unrelated parallel machine scheduling
Khadivi, Maziyar, Abbasi, Mostafa, Charter, Todd, Najjaran, Homayoun
This paper addresses a production scheduling problem derived from an industrial use case, focusing on unrelated parallel machine scheduling with the personnel availability constraint. The proposed model optimizes the production plan over a multi-period scheduling horizon, accommodating variations in personnel shift hours within each time period. It assumes shared personnel among machines, with one personnel required per machine for setup and supervision during job processing. Available personnel are fewer than the machines, thus limiting the number of machines that can operate in parallel. The model aims to minimize the total production time considering machine-dependent processing times and sequence-dependent setup times. The model handles practical scenarios like machine eligibility constraints and production time windows. A Mixed Integer Linear Programming (MILP) model is introduced to formulate the problem, taking into account both continuous and discrete variables. A two-step solution approach enhances computational speed, first maximizing accepted jobs and then minimizing production time. Validation with synthetic problem instances and a real industrial case study of a food processing plant demonstrates the performance of the model and its usefulness in personnel shift planning. The findings offer valuable insights for practical managerial decision-making in the context of production scheduling.
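The two-step approach above is a lexicographic objective: first maximize accepted jobs, then, among those optima, minimize production time. A standard way to illustrate this (a sketch under that assumption, not the paper's MILP) is to collapse both objectives into one score with a weight large enough that the first objective always dominates:

```python
# Hedged sketch of a lexicographic (two-step) objective: a sufficiently large
# weight M makes the accepted-jobs count dominate production time. The
# candidate solutions below are made up.
def lex_score(accepted, production_time, M=10**6):
    return M * accepted - production_time

# (accepted jobs, production time): accepting more jobs always wins; ties on
# acceptance are broken by shorter production time.
candidates = [(5, 40), (5, 32), (4, 10)]
best = max(candidates, key=lambda c: lex_score(*c))
print(best)  # (5, 32)
```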
- North America > United States (0.04)
- North America > Canada > British Columbia > Vancouver Island > Capital Regional District > Victoria (0.04)
- Europe > Germany (0.04)
- Food & Agriculture (0.48)
- Energy (0.46)
A Preconditioned Interior Point Method for Support Vector Machines Using an ANOVA-Decomposition and NFFT-Based Matrix-Vector Products
Wagner, Theresa, Pearson, John W., Stoll, Martin
The training of support vector machines (SVMs) leads to large-scale quadratic programs (QPs) [1]. An efficient way to solve these optimization problems is the sequential minimal optimization (SMO) algorithm introduced in [2]. The main motivation for the design of the SMO algorithm comes from the fact that existing optimization methods, i.e., quadratic programming approaches, cannot handle the large-scale dense kernel matrix efficiently. The SMO algorithm is motivated by the result obtained in [3] that showed the optimization problem can be decomposed into the solution of smaller subproblems, which avoids the large-scale dense matrices. When tackling the training as a QP task, the use of interior point methods (IPM) has also been studied in the seminal paper of Fine and Scheinberg [4]: the authors use a low-rank approximation of the kernel matrix and propose a pivoted low-rank Cholesky decomposition to approximate the kernel matrix. A similar matrix approximation was also proposed in [5].
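The pivoted low-rank Cholesky idea attributed to [4] can be sketched in a few lines (an illustrative implementation of the general technique, not the authors' code): greedily pick the largest remaining diagonal entry and peel off one rank-1 factor, so a symmetric positive semidefinite kernel matrix K is approximated by L L^T with few columns.

```python
import math

# Hedged sketch of pivoted low-rank Cholesky for an SPSD kernel matrix K
# (stored as nested lists). Each iteration removes one rank-1 factor chosen
# by the largest residual diagonal entry.
def pivoted_cholesky(K, rank, tol=1e-12):
    n = len(K)
    d = [K[i][i] for i in range(n)]        # residual diagonal
    cols = []
    for _ in range(rank):
        p = max(range(n), key=lambda i: d[i])
        if d[p] <= tol:
            break                          # residual negligible: stop early
        l = [0.0] * n
        l[p] = math.sqrt(d[p])
        for i in range(n):
            if i != p:
                s = K[i][p] - sum(c[i] * c[p] for c in cols)
                l[i] = s / l[p]
        for i in range(n):
            d[i] -= l[i] ** 2
        cols.append(l)
    return cols                            # K ~ sum of outer(l, l) over cols

# A rank-1 kernel matrix is reproduced exactly by a single factor:
K = [[1, 2, 3], [2, 4, 6], [3, 6, 9]]
cols = pivoted_cholesky(K, rank=2)
approx = [[sum(c[i] * c[j] for c in cols) for j in range(3)] for i in range(3)]
```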
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- North America > United States > Rhode Island > Providence County > Providence (0.04)
A General Framework for Utilizing Metaheuristic Optimization for Sustainable Unrelated Parallel Machine Scheduling: A Concise Overview
Sustainable development has emerged as a global priority, and industries are increasingly striving to align their operations with sustainable practices. Parallel machine scheduling (PMS) is a critical aspect of production planning that directly impacts resource utilization and operational efficiency. In this paper, we investigate the application of metaheuristic optimization algorithms to address the unrelated parallel machine scheduling problem (UPMSP) through the lens of sustainable development goals (SDGs). The primary objective of this study is to explore how metaheuristic optimization algorithms can contribute to achieving sustainable development goals in the context of UPMSP. We examine a range of metaheuristic algorithms, including genetic algorithms, particle swarm optimization, ant colony optimization, and more, and assess their effectiveness in optimizing the scheduling problem. The algorithms are evaluated based on their ability to improve resource utilization, minimize energy consumption, reduce environmental impact, and promote socially responsible production practices. To conduct a comprehensive analysis, we consider UPMSP instances that incorporate sustainability-related constraints and objectives.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- South America (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Research Report > Promising Solution (1.00)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Research Report > Experimental Study (0.67)
- Social Sector (0.89)
- Energy > Renewable (0.67)
- Law > Environmental Law (0.48)
- Water & Waste Management > Solid Waste Management (0.46)
NLP Reproducibility For All: Understanding Experiences of Beginners
Storks, Shane, Yu, Keunwoo Peter, Ma, Ziqiao, Chai, Joyce
As natural language processing (NLP) has recently seen an unprecedented level of excitement, and more people are eager to enter the field, it is unclear whether current research reproducibility efforts are sufficient for this group of beginners to apply the latest developments. To understand their needs, we conducted a study with 93 students in an introductory NLP course, where students reproduced the results of recent NLP papers. Surprisingly, we find that their programming skill and comprehension of research papers have a limited impact on their effort spent completing the exercise. Instead, we find accessibility efforts by research authors to be the key to success, including complete documentation, better coding practice, and easier access to data files. Going forward, we recommend that NLP researchers pay close attention to these simple aspects of open-sourcing their work, and use insights from beginners' feedback to provide actionable ideas on how to better support them.
- North America > Dominican Republic (0.04)
- Europe > France > Provence-Alpes-Côte d'Azur > Bouches-du-Rhône > Marseille (0.04)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.04)
- Research Report > New Finding (1.00)
- Instructional Material (0.93)
- Questionnaire & Opinion Survey (0.93)
- Research Report > Experimental Study (0.67)